Federated learning (FL) is an emerging machine learning paradigm in which clients jointly learn a model with the help of a cloud server. A fundamental challenge of FL is that the clients are often heterogeneous, e.g., they have different amounts of computing power, and thus their model updates may reach the server with substantially different delays. Asynchronous FL addresses this challenge by letting the server update the model as soon as any client's model update arrives, without waiting for the other clients. However, like synchronous FL, asynchronous FL is vulnerable to poisoning attacks, in which malicious clients manipulate the model by poisoning their local data and/or the model updates they send to the server. Byzantine-robust FL aims to defend against such attacks; in particular, it can learn an accurate model even if some clients are malicious and exhibit Byzantine behavior. However, most existing studies on Byzantine-robust FL focus on synchronous FL, leaving asynchronous FL largely unexplored. In this work, we bridge this gap by proposing AFLGuard, a Byzantine-robust asynchronous FL method. We show, both theoretically and empirically, that AFLGuard is robust against various existing and adaptive poisoning attacks (both untargeted and targeted). Moreover, AFLGuard outperforms existing Byzantine-robust asynchronous FL methods.
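To make the asynchronous setting above concrete, here is a minimal sketch of an asynchronous FL server loop with a pluggable robustness filter. The acceptance rule shown (keep a client update only if it stays close to a reference update computed on a small trusted dataset held by the server) is an assumption used for illustration, not necessarily AFLGuard's exact criterion; the names `server_step`, `trusted_update`, and `ACCEPT_FACTOR` are hypothetical.

```python
import numpy as np

ACCEPT_FACTOR = 2.0   # hypothetical tolerance; AFLGuard's actual criterion may differ
ETA = 0.1             # server learning rate

def server_step(global_model, client_update, trusted_update):
    """Asynchronously apply one client's update if it passes a robustness check.

    global_model, client_update, trusted_update: 1-D numpy arrays (flattened parameters).
    trusted_update is assumed to be computed by the server on a small trusted dataset.
    """
    # Accept the update only if it is not too far from the trusted reference update.
    if np.linalg.norm(client_update - trusted_update) <= ACCEPT_FACTOR * np.linalg.norm(trusted_update):
        global_model = global_model - ETA * client_update
    # Otherwise the update is discarded; either way, the server never waits for other clients.
    return global_model

# Toy usage: three updates arrive one by one, in arbitrary order.
rng = np.random.default_rng(0)
model = np.zeros(10)
trusted = rng.normal(size=10)
for _ in range(3):
    update = trusted + 0.1 * rng.normal(size=10)   # a benign-looking update
    model = server_step(model, update, trusted)
```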
Classifiers in supervised learning have various security and privacy issues, e.g., 1) data poisoning attacks, backdoor attacks, and adversarial examples on the security side as well as 2) inference attacks and the right to be forgotten for the training data on the privacy side. Various secure and privacy-preserving supervised learning algorithms with formal guarantees have been proposed to address these issues. However, they suffer from various limitations such as accuracy loss, small certified security guarantees, and/or inefficiency. Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data. Given a pre-trained encoder as a feature extractor, supervised learning can train a simple yet accurate classifier using a small amount of labeled training data. In this work, we perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms. Our key findings are that a pre-trained encoder substantially improves 1) both accuracy under no attacks and certified security guarantees against data poisoning and backdoor attacks of state-of-the-art secure learning algorithms (i.e., bagging and KNN), 2) certified security guarantees of randomized smoothing against adversarial examples without sacrificing its accuracy under no attacks, 3) accuracy of differentially private classifiers, and 4) accuracy and/or efficiency of exact machine unlearning.
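As a concrete illustration of the "frozen encoder plus simple classifier" setup this study builds on, the sketch below trains a linear probe on features from a pre-trained encoder. The `encode` function, the random stand-in features, and the data are placeholders for illustration only; this is not the measurement pipeline used in the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(images, encoder=None):
    """Placeholder for a frozen pre-trained encoder that maps images to feature vectors.
    In practice this would be a self-supervised image encoder used with gradients disabled."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(images), 128))   # hypothetical 128-dim features

# A small amount of labeled data can suffice once the features are good.
train_images, train_labels = list(range(200)), np.random.randint(0, 10, size=200)
test_images, test_labels = list(range(50)), np.random.randint(0, 10, size=50)

train_feats = encode(train_images)               # encoder stays frozen
test_feats = encode(test_images)

clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)  # simple linear probe
print("linear-probe accuracy:", clf.score(test_feats, test_labels))
```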
Monocular 3D human pose estimation is quite challenging due to the inherent ambiguity and occlusion, which often lead to high uncertainty and indeterminacy. On the other hand, diffusion models have recently emerged as an effective tool for generating high-quality images from noise. Inspired by their capability, we explore a novel pose estimation framework (DiffPose) that formulates 3D pose estimation as a reverse diffusion process. We incorporate novel designs into our DiffPose that facilitate the diffusion process for 3D pose estimation: a pose-specific initialization of pose uncertainty distributions, a Gaussian Mixture Model-based forward diffusion process, and a context-conditioned reverse diffusion process. Our proposed DiffPose significantly outperforms existing methods on the widely used pose estimation benchmarks Human3.6M and MPI-INF-3DHP.
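For readers unfamiliar with treating pose estimation as denoising, the sketch below shows a standard DDPM-style ancestral sampling loop that iteratively refines a noisy 3D pose conditioned on context. It is a generic reverse-diffusion sketch, not DiffPose itself: the noise schedule, the `denoiser` placeholder, and the Gaussian initialization are assumptions, and DiffPose's pose-specific initialization, GMM-based forward process, and context conditioning are not reproduced here.

```python
import numpy as np

T = 50
betas = np.linspace(1e-4, 0.05, T)                 # assumed noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def denoiser(x_t, t, context):
    """Placeholder for a learned network that predicts the noise in x_t,
    conditioned on 2D-pose / image context. Returns an array shaped like x_t."""
    return np.zeros_like(x_t)

def sample_pose(context, num_joints=17, rng=np.random.default_rng(0)):
    """Generic DDPM ancestral sampling for a 3D pose of shape (num_joints, 3)."""
    x = rng.normal(size=(num_joints, 3))           # start from noise (DiffPose uses a pose-specific init)
    for t in reversed(range(T)):
        eps = denoiser(x, t, context)
        coef = (1 - alphas[t]) / np.sqrt(1 - alpha_bars[t])
        x = (x - coef * eps) / np.sqrt(alphas[t])  # posterior mean step
        if t > 0:
            x = x + np.sqrt(betas[t]) * rng.normal(size=x.shape)  # add sampling noise
    return x

pose_3d = sample_pose(context=None)
```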
This paper studies how to flexibly integrate reconstructed 3D models into practical 3D modeling pipelines such as 3D scene creation and rendering. Due to technical difficulty, one can only obtain rough 3D models (R3DMs) for most real objects with existing 3D reconstruction techniques. As a result, physically-based rendering (PBR) renders low-quality images or videos for scenes constructed from R3DMs. One promising solution is to represent real-world objects as Neural Fields such as NeRFs, which can generate photo-realistic renderings of an object under desired viewpoints. However, a drawback is that the views synthesized through Neural Fields Rendering (NFR) cannot reflect the simulated lighting details on R3DMs in PBR pipelines, especially when object interactions during 3D scene creation cause local shadows. To resolve this dilemma, we propose a lighting transfer network (LighTNet) to bridge NFR and PBR so that they can benefit from each other. LighTNet reasons about a simplified image composition model, remedies the uneven-surface issue caused by R3DMs, and is empowered by several perceptually motivated constraints and a new Lab angle loss that enhances the contrast between lighting strength and colors. Comparisons demonstrate that LighTNet is superior at synthesizing impressive lighting and is promising for pushing NFR further in practical 3D modeling workflows. Project page: https://3d-front-future.github.io/LighTNet.
Unmanned aerial vehicles (UAVs) offer various advantages, but their practical applications are constrained by their limited onboard energy. It is therefore important to manage their power consumption and, accordingly, to establish suitable power consumption models. However, most existing works either build theoretical power consumption models only for fixed-wing UAVs and single-rotor UAVs, or provide heuristic power consumption models for multirotor UAVs without rigorous mathematical derivation. This paper aims to establish a theoretical power consumption model for multirotor UAVs. Specifically, by exploiting the relationship between single-rotor and multirotor UAVs, closed-form power consumption models of a multirotor UAV are derived for three flight statuses, namely forward flight, vertical ascent, and vertical descent. On this basis, a generic flight power consumption model of the UAV in three-dimensional (3-D) scenarios is obtained. Extensive experiments in real-world scenarios, carried out with a DJI M210 and a mobile application built on the DJI Mobile SDK, confirm the correctness and effectiveness of these models; in addition, simulations are conducted to further investigate how the number of rotors affects the UAV's power consumption. The proposed power consumption models not only reveal how the power consumption of multirotor UAVs is affected by various factors, but also pave the way for introducing other novel applications.
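For orientation, a textbook single-rotor hover-power expression from rotorcraft theory is reproduced below (induced power plus blade-profile power); it illustrates the kind of relationship that closed-form multirotor models build on, but it is a standard formula rather than one of the paper's derived results, and the symbols are the usual rotorcraft ones, not necessarily the paper's notation.

\[
P_{\mathrm{hover}} \;\approx\; \underbrace{\frac{T^{3/2}}{\sqrt{2\rho A}}}_{\text{induced power}} \;+\; \underbrace{\frac{\sigma C_{d0}}{8}\,\rho A\,(\Omega R)^{3}}_{\text{blade-profile power}},
\]

where $T$ is the rotor thrust, $\rho$ the air density, $A$ the rotor disk area, $\sigma$ the rotor solidity, $C_{d0}$ the blade profile drag coefficient, $\Omega$ the rotor angular speed, and $R$ the rotor radius. For an $n$-rotor UAV of weight $W$ in hover, each rotor carries roughly $T \approx W/n$, which is how single-rotor results are extended to the multirotor case.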
Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients corrupt the global model by sending manipulated model updates to the server. Existing defenses mainly rely on Byzantine-robust FL methods, which aim to learn an accurate global model even if some clients are malicious. However, in practice they can only tolerate a small number of malicious clients; defending against model poisoning attacks with a large number of malicious clients remains an open challenge. Our FLDetector addresses this challenge by detecting malicious clients. FLDetector aims to detect and remove the majority of the malicious clients, so that a Byzantine-robust FL method can learn an accurate global model using the remaining clients. Our key observation is that, under model poisoning attacks, the model updates a client sends across multiple iterations are inconsistent. Therefore, FLDetector detects malicious clients by checking the consistency of their model updates. Roughly speaking, the server predicts each client's model update from its historical model updates using the Cauchy mean value theorem and L-BFGS, and flags a client as malicious if its received model updates are inconsistent with the predicted ones over multiple iterations. Our extensive experiments on three benchmark datasets show that FLDetector accurately detects malicious clients under multiple state-of-the-art model poisoning attacks. After removing the detected malicious clients, existing Byzantine-robust FL methods can learn accurate global models.
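A minimal sketch of the consistency idea is given below: the server keeps each client's past updates, forms a prediction for the next one, and scores clients by how far their actual updates drift from the predictions over a window of iterations. The prediction step is reduced to a placeholder here; FLDetector's actual L-BFGS-based prediction and its clustering-based thresholding are more involved, and the names `predict_update` and `suspicious_scores` are hypothetical.

```python
import numpy as np

def predict_update(prev_update, hvp_correction):
    """Predicted update for this round. FLDetector forms this from the previous update plus an
    L-BFGS-style Hessian-vector-product correction (via the Cauchy mean value theorem);
    here hvp_correction is simply a placeholder array of the same shape."""
    return prev_update + hvp_correction

def suspicious_scores(received, predicted, window=5):
    """received[c] and predicted[c]: lists of 1-D arrays for client c (most recent last).
    Returns the average prediction error per client over the last `window` rounds;
    larger scores indicate less consistent (more suspicious) clients."""
    scores = {}
    for c in received:
        errs = [np.linalg.norm(r - p) for r, p in zip(received[c][-window:], predicted[c][-window:])]
        scores[c] = float(np.mean(errs)) if errs else 0.0
    return scores

# Clients whose scores clearly separate from the rest (e.g., via 2-means clustering on the
# scores, as a thresholding heuristic) would be flagged as malicious and removed.
```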
Domain generalization (DG) for person re-identification (ReID) is a challenging problem, since data from the target domain cannot be accessed during training. Most existing DG ReID methods update the feature extractor and the classifier parameters with the same features. This common practice causes the model to overfit the feature styles present in the source domains, yielding sub-optimal generalization to the target domain even when meta-learning is used. To address this problem, we propose a novel style-interleaved learning framework. Unlike conventional learning strategies, interleaved learning uses two forward propagations and one backward propagation in each iteration. We use features with interleaved styles to update the feature extractor and the classifier in different forward propagations, which helps the model avoid overfitting to certain domain styles. To fully exploit the advantages of style-interleaved learning, we further propose a novel feature stylization approach to diversify feature styles. This approach not only mixes the feature styles of multiple training samples, but also samples new and meaningful feature styles from batch-level style distributions. Extensive experimental results show that our model consistently outperforms state-of-the-art methods on large-scale DG ReID benchmarks, with clear advantages in computational efficiency. Code is available at https://github.com/wentaotan/interleaved-learning.
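The batch-level feature stylization can be pictured with a MixStyle-like operation on per-channel statistics, sketched below. This is an illustrative approximation using instance mean/std mixing within a batch, not necessarily the exact stylization proposed in the paper; `stylize_features` and the Beta mixing parameter `alpha` are assumptions.

```python
import numpy as np

def stylize_features(feats, alpha=0.3, rng=np.random.default_rng(0)):
    """feats: (N, C, H, W) feature maps. Mix each sample's per-channel mean/std with those of a
    randomly chosen batch partner, producing new feature styles while keeping the content."""
    n = feats.shape[0]
    mu = feats.mean(axis=(2, 3), keepdims=True)            # (N, C, 1, 1) channel means
    sigma = feats.std(axis=(2, 3), keepdims=True) + 1e-6    # channel standard deviations
    perm = rng.permutation(n)                               # partner samples for style mixing
    lam = rng.beta(alpha, alpha, size=(n, 1, 1, 1))         # per-sample mixing weights
    mixed_mu = lam * mu + (1 - lam) * mu[perm]
    mixed_sigma = lam * sigma + (1 - lam) * sigma[perm]
    normalized = (feats - mu) / sigma                       # strip the original style
    return normalized * mixed_sigma + mixed_mu              # re-style with mixed statistics

styled = stylize_features(np.random.rand(8, 16, 14, 14))
```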
Contrastive learning pre-trains an image encoder using a large amount of unlabeled data such that the image encoder can serve as a general-purpose feature extractor for various downstream tasks. In this work, we propose PoisonedEncoder, a data poisoning attack on contrastive learning. In particular, an attacker injects carefully crafted poisoning inputs into the unlabeled pre-training data, such that the downstream classifiers built on the poisoned encoder for multiple target downstream tasks simultaneously classify attacker-chosen, arbitrary clean inputs as attacker-chosen, arbitrary classes. We formulate our data poisoning attack as a bilevel optimization problem, whose solution is the set of poisoning inputs, and we propose a contrastive-learning-tailored method to approximately solve it. Our evaluation on multiple datasets shows that PoisonedEncoder achieves high attack success rates while maintaining the testing accuracy of the downstream classifiers built upon the poisoned encoder for non-attacker-chosen inputs. We also evaluate five defenses against PoisonedEncoder: one pre-processing, three in-processing, and one post-processing defense. Our results show that these defenses can decrease the attack success rate of PoisonedEncoder, but they also sacrifice the utility of the encoder or require a large clean pre-training dataset.
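Schematically, the bilevel structure mentioned above can be written as follows. This is a generic form for exposition in notation of our own choosing; the paper's exact objective, and the contrastive-learning-tailored relaxation used to approximately solve it, differ in details.

\[
\max_{\mathcal{P}} \;\sum_{(x,\,y)\in\mathcal{T}} \mathbb{1}\!\left[ C_{\theta^{*}(\mathcal{P})}(x) = y \right]
\quad \text{s.t.} \quad
\theta^{*}(\mathcal{P}) = \arg\min_{\theta}\; \mathcal{L}_{\mathrm{CL}}\!\left(\theta;\ \mathcal{D}\cup\mathcal{P}\right),
\]

where $\mathcal{P}$ is the set of poisoning inputs, $\mathcal{D}$ the clean unlabeled pre-training data, $\mathcal{L}_{\mathrm{CL}}$ the contrastive pre-training loss, $\theta^{*}(\mathcal{P})$ the encoder pre-trained on the poisoned data, $\mathcal{T}$ the attacker-chosen (input, class) pairs, and $C_{\theta^{*}(\mathcal{P})}$ a downstream classifier built on the poisoned encoder.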
Pre-trained encoders are general-purpose feature extractors that can be used for many downstream tasks. Recent progress in self-supervised learning makes it possible to pre-train highly effective encoders using a large amount of unlabeled data, giving rise to the emerging encoder-as-a-service (EaaS) paradigm. A pre-trained encoder may be deemed confidential because its training requires a large amount of data and computational resources, and because its public release could facilitate misuse of AI, e.g., for deepfakes. In this paper, we propose the first attack, called StolenEncoder, to steal pre-trained image encoders. We evaluate StolenEncoder on multiple target encoders pre-trained by ourselves and on three real-world target encoders, including the ImageNet encoder pre-trained by Google, the CLIP encoder pre-trained by OpenAI, and Clarifai's General Embedding encoder deployed as a paid EaaS. Our results show that the encoders stolen by StolenEncoder have functionality similar to the target encoders; in particular, downstream classifiers built upon a target encoder and upon its stolen counterpart have similar accuracy. Moreover, stealing a target encoder with StolenEncoder requires much less data and far fewer computational resources than pre-training it from scratch. We also explore three defenses that perturb the feature vectors produced by the target encoder. Our results show that these defenses are not sufficient to mitigate StolenEncoder.
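The core stealing recipe can be sketched as embedding distillation: query the target encoder on unlabeled images and train a surrogate to produce similar feature vectors. The loss, the toy surrogate architecture, and `query_target_encoder` below are simplified assumptions, not the paper's exact attack, which further reduces query cost (e.g., by reusing queried features for augmented views).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy surrogate encoder; in practice this would be a proper image backbone.
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 128))
optimizer = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

def query_target_encoder(images):
    """Placeholder for the black-box target encoder / EaaS API returning feature vectors."""
    return torch.randn(images.shape[0], 128)

for step in range(100):                      # toy loop over unlabeled surrogate data
    images = torch.rand(64, 3, 32, 32)       # stand-in for real unlabeled images
    with torch.no_grad():
        target_feats = query_target_encoder(images)   # one API query per image
    stolen_feats = surrogate(images)
    # Align the surrogate's embeddings with the target's; cosine alignment is a common choice.
    loss = (1 - F.cosine_similarity(stolen_feats, target_feats, dim=1)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```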
Nuclei segmentation is a fundamental task in digital pathology analysis and can be automated by deep-learning-based methods. However, developing such automated methods requires a large amount of data with precisely annotated masks, which are hard to obtain. Training with weakly labeled data is a popular solution for reducing the annotation workload. In this paper, we propose a novel meta-learning-based nuclei segmentation method that follows the label-correction paradigm to exploit data with noisy masks. Specifically, we design a fully convolutional meta-model that corrects noisy masks using a small amount of clean meta-data; the corrected masks can then be used to supervise the training of the segmentation model. Meanwhile, a bi-level optimization method is adopted to alternately update the parameters of the main segmentation model and the meta-model in an end-to-end manner. Extensive experimental results on two nuclei segmentation datasets show that our method achieves state-of-the-art results, and in some noisy settings it even achieves performance comparable to training the model on clean, fully supervised data.
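The alternating bi-level update can be outlined as follows: in one step, the meta-model corrects noisy masks and the segmentation model trains on the corrected masks; in the other, the meta-model is adjusted using the small clean meta set. The sketch below is a simplified schematic under that reading (single gradient steps, placeholder single-layer models, and a crude stand-in for the paper's proper meta-gradient), not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

seg_model = nn.Conv2d(3, 1, kernel_size=3, padding=1)      # placeholder segmentation network
meta_model = nn.Conv2d(1, 1, kernel_size=3, padding=1)     # placeholder mask-correction network
seg_opt = torch.optim.SGD(seg_model.parameters(), lr=1e-2)
meta_opt = torch.optim.SGD(meta_model.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(10):                                      # toy training loop with random tensors
    imgs, noisy_masks = torch.rand(4, 3, 64, 64), torch.rand(4, 1, 64, 64)
    meta_imgs, clean_masks = torch.rand(2, 3, 64, 64), (torch.rand(2, 1, 64, 64) > 0.5).float()

    # (1) Correct noisy masks with the meta-model, then update the segmentation model on them.
    corrected = torch.sigmoid(meta_model(noisy_masks)).detach()
    seg_loss = bce(seg_model(imgs), corrected)
    seg_opt.zero_grad(); seg_loss.backward(); seg_opt.step()

    # (2) Update the meta-model using the clean meta set. (The paper uses a proper bi-level
    #     meta-gradient; here it is crudely approximated by training the meta-model to map the
    #     segmentation model's predictions on meta images to their clean masks.)
    meta_loss = bce(meta_model(torch.sigmoid(seg_model(meta_imgs)).detach()), clean_masks)
    meta_opt.zero_grad(); meta_loss.backward(); meta_opt.step()
```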